Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric, PartPQ, is biased toward PQ. To address both issues, we make the following contributions. First, we design a meta-architecture that decouples the part features from the thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives; it also decouples the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and building on our meta-architecture, we propose Panoptic-PartFormer++, which adds a new part-whole cross-attention scheme, a part-whole interaction method based on masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and a 5% PartPQ improvement on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost reduction of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
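A minimal PyTorch-style sketch of what the part-whole masked cross attention described above could look like, assuming a Mask2Former-style masked-attention formulation; the module name, tensor shapes, and the 0.5 threshold are illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class PartWholeMaskedCrossAttention(nn.Module):
    """Hypothetical sketch: part queries attend to pixel features, but only
    inside the region predicted for their associated whole (thing/stuff) mask,
    in the spirit of Mask2Former-style masked attention."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, part_queries, pixel_feats, whole_mask_logits):
        # part_queries: (B, Np, C); pixel_feats: (B, HW, C)
        # whole_mask_logits: (B, Np, HW), logits of the whole each part belongs to
        attn_mask = whole_mask_logits.sigmoid() < 0.5        # True = blocked
        attn_mask[attn_mask.all(dim=-1)] = False             # avoid fully-masked rows
        attn_mask = attn_mask.repeat_interleave(self.num_heads, dim=0)
        out, _ = self.attn(part_queries, pixel_feats, pixel_feats,
                           attn_mask=attn_mask)
        return part_queries + out                            # residual update
```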
In this work, we focus on instance-level open vocabulary segmentation, aiming to expand a segmenter to instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting the thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignment. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset under two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvement on novel classes on the OSPS benchmark under various settings.
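As a rough illustration of how caption grounding could be implemented, the sketch below aligns object-query embeddings with caption-noun embeddings through a symmetric contrastive objective; the function name, soft-matching scheme, and temperature are assumptions, not the CGG paper's exact grounding loss:

```python
import torch
import torch.nn.functional as F

def grounding_loss(query_embed, noun_embed, temperature=0.07):
    """Sketch of a caption-grounding objective (formulation assumed): object
    queries are softly matched to the caption's noun embeddings inside each
    image, the matched similarities are pooled into an image-caption score,
    and the scores are contrasted across the batch.

    query_embed: (B, Nq, C) projected object-query features
    noun_embed:  (B, Nn, C) text embeddings of object nouns per caption
    """
    q = F.normalize(query_embed, dim=-1)
    t = F.normalize(noun_embed, dim=-1)
    # noun-to-query similarities for every image-caption pair in the batch
    sim = torch.einsum('iqc,jnc->ijqn', q, t) / temperature       # (B, B, Nq, Nn)
    # soft-match each noun to its best query, then average over nouns
    scores = sim.softmax(dim=2).mul(sim).sum(dim=2).mean(dim=-1)  # (B, B)
    labels = torch.arange(q.size(0), device=q.device)
    # symmetric InfoNCE over matched image-caption pairs
    return 0.5 * (F.cross_entropy(scores, labels) +
                  F.cross_entropy(scores.t(), labels))
```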
In this paper, we focus on exploring effective methods for faster, more accurate, and domain-agnostic semantic segmentation. Inspired by optical flow, which aligns motion between adjacent video frames, we propose a Flow Alignment Module (FAM) to learn the semantic flow between feature maps of adjacent levels and broadcast high-level features to high-resolution features effectively and efficiently. Moreover, integrating our FAM with a common feature pyramid structure exhibits superior performance over other real-time methods, even on lightweight backbones such as ResNet-18 and DFNet. Then, to further speed up inference, we propose a novel Gated Dual Flow Alignment Module to directly align high-resolution and low-resolution feature maps; we term this improved network SFNet-Lite. Extensive experiments are conducted on several challenging datasets, and the results show the effectiveness of SFNet and SFNet-Lite. In particular, the proposed SFNet-Lite series achieves 80.1 mIoU while running at 60 FPS with a ResNet-18 backbone and 78.8 mIoU while running at 120 FPS with an STDC backbone on an RTX-3090. Moreover, we unify four challenging driving datasets (i.e., Cityscapes, Mapillary, IDD, and BDD) into one large dataset, which we name the Unified Driving Segmentation (UDS) dataset. It contains diverse domain and style information. We benchmark several representative works on UDS. SFNet and SFNet-Lite still achieve the best speed-accuracy trade-off on UDS, serving as strong baselines in this new challenging setting. All code and models are publicly available at https://github.com/lxtgh/sfsegnets.
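A simplified sketch of a Flow Alignment Module matching the description above, assuming the flow is predicted by a small convolution over concatenated features and applied with bilinear warping; layer widths and details are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlignmentModule(nn.Module):
    """Sketch of a FAM: predict a 2-channel semantic flow field from a coarse
    and a fine feature map, then warp the coarse (high-level) features to the
    fine resolution with that flow."""
    def __init__(self, channels=128):
        super().__init__()
        self.down_h = nn.Conv2d(channels, channels, 1, bias=False)
        self.down_l = nn.Conv2d(channels, channels, 1, bias=False)
        self.flow_make = nn.Conv2d(channels * 2, 2, 3, padding=1, bias=False)

    def forward(self, low_feat, high_feat):
        # low_feat: fine-resolution feature (B, C, H, W)
        # high_feat: coarse, semantically stronger feature (B, C, h, w)
        h, w = low_feat.shape[-2:]
        high_up = F.interpolate(self.down_h(high_feat), size=(h, w),
                                mode='bilinear', align_corners=False)
        flow = self.flow_make(torch.cat([high_up, self.down_l(low_feat)], dim=1))
        return self.flow_warp(high_feat, flow, (h, w))

    @staticmethod
    def flow_warp(feat, flow, size):
        h, w = size
        norm = feat.new_tensor([w, h]).view(1, 1, 1, 2)   # normalize pixel offsets
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=feat.device),
                                torch.linspace(-1, 1, w, device=feat.device),
                                indexing='ij')
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)        # (1, H, W, 2)
        grid = grid + flow.permute(0, 2, 3, 1) / norm            # offset by flow
        return F.grid_sample(feat, grid, mode='bilinear', align_corners=False)
```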
We focus on bridging the domain gap in lane detection across different scenarios to greatly reduce the extra annotation and re-training costs for autonomous driving. A critical factor hindering the performance improvement of cross-domain lane detection is that conventional methods only focus on pixel-wise loss while ignoring the shape and position priors of lanes. To address this issue, we propose the Multi-level Domain Adaptation (MLDA) framework, a new perspective that handles cross-domain lane detection at three complementary semantic levels: pixel, instance, and category. Specifically, at the pixel level, we propose applying cross-level confidence constraints in self-training to deal with the imbalanced confidence distribution of lanes and background. At the instance level, we go beyond pixels to treat segmented lanes as instances and facilitate discriminative features in the target domain with triplet learning, which effectively rebuilds the semantic context of lanes and helps alleviate feature confusion. At the category level, we propose an adaptive inter-domain embedding module to exploit the position prior of lanes during adaptation. On two challenging datasets, i.e., TuSimple and CULane, our method improves lane detection performance by large margins, with gains of 8.8% in accuracy and 7.4% in F1-score respectively, compared with advanced domain adaptation algorithms.
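For the pixel-level confidence constraint, a minimal sketch of how class-dependent thresholds might be applied when generating self-training pseudo-labels is shown below; the threshold values and the exact rule are assumptions, not the MLDA implementation:

```python
import torch

def make_pseudo_labels(prob, lane_thresh=0.6, bg_thresh=0.9, ignore_index=255):
    """Illustrative sketch of confidence-constrained self-training for lane
    segmentation: lane pixels are rare and usually less confident than
    background, so a lower threshold is assumed for lane classes than for
    background, and low-confidence pixels are ignored in the pseudo-label.

    prob: (B, C, H, W) softmax probabilities from the source-trained model,
          with channel 0 assumed to be background.
    """
    conf, label = prob.max(dim=1)                   # per-pixel confidence / class
    thresh = torch.where(label == 0,
                         torch.full_like(conf, bg_thresh),
                         torch.full_like(conf, lane_thresh))
    pseudo = label.clone()
    pseudo[conf < thresh] = ignore_index            # ignored in the training loss
    return pseudo
```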
In this paper, we propose an advanced approach to monocular 3D lane detection by leveraging the geometric structure underlying the 2D-to-3D lane reconstruction process. Inspired by previous methods, we first analyze the geometric heuristics between a 3D lane and its 2D representation, and propose to impose explicit supervision based on the structural prior, which enables building inter-lane and intra-lane relationships to facilitate the reconstruction of 3D lanes from local to global. Second, to reduce the structural loss in the 2D lane representation, we directly extract top-view lane information from front-view images, which greatly alleviates the confusion of distant lane features in previous methods. Furthermore, we propose a novel task-specific data augmentation method that synthesizes new training data for the segmentation and reconstruction tasks in our pipeline, countering the imbalanced data distribution of camera pose and ground slope and improving generalization to unseen data. Our work marks the first attempt to use geometric information in DNN-based 3D lane detection and makes it possible to detect lanes at extra-long distances, doubling the original detection range. The proposed method can be smoothly adopted by other frameworks without extra cost. Experimental results show that our method outperforms state-of-the-art approaches on the Apollo 3D synthetic dataset by 3.8% at a real-time speed of 82 FPS, without introducing extra parameters.
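As an illustration of the kind of geometry involved in lifting 2D lane observations to 3D, the sketch below intersects camera rays with a flat ground plane given the camera height and pitch; this flat-ground simplification and all names are assumptions, not the paper's actual reconstruction procedure:

```python
import numpy as np

def backproject_to_ground(uv, K, cam_height=1.5, pitch_deg=0.0):
    """Illustrative geometric sketch: lift front-view lane pixels to 3D by
    intersecting camera rays with a flat ground plane, given camera
    intrinsics K, mounting height, and pitch (positive = tilted down).

    uv: (N, 2) pixel coordinates of lane points.
    Returns (N, 3) points in a ground frame: x right, y forward, z up.
    """
    pitch = np.deg2rad(pitch_deg)
    ones = np.ones((uv.shape[0], 1))
    # ray directions in the camera frame (x right, y down, z forward)
    rays_cam = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T
    # rotate camera-frame rays into the ground frame
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -np.sin(pitch), np.cos(pitch)],
                  [0.0, -np.cos(pitch), -np.sin(pitch)]])
    rays = rays_cam @ R.T
    # camera sits at (0, 0, cam_height); intersect each ray with plane z = 0
    t = cam_height / np.clip(-rays[:, 2], 1e-6, None)
    return t[:, None] * rays + np.array([0.0, 0.0, cam_height])
```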
Panoptic Part Segmentation (PPS) aims to unify panoptic segmentation and part segmentation into one task. Previous works mainly utilize separate approaches to handle thing, stuff, and part predictions without performing any shared computation or task association. In this work, we aim to unify these tasks at the architectural level, designing the first end-to-end unified method named Panoptic-PartFormer. In particular, motivated by recent progress in Vision Transformers, we model things, stuff, and parts as object queries and directly learn to optimize all three predictions as a unified mask prediction and classification problem. We design a decoupled decoder to generate part features and thing/stuff features, respectively. Then we propose to utilize all the queries and their corresponding features to perform reasoning jointly. The final masks can be obtained via the inner product between the queries and the corresponding features. Extensive ablation studies and analysis demonstrate the effectiveness of our framework. Our Panoptic-PartFormer achieves new state-of-the-art results on both the Cityscapes PPS and Pascal Context PPS datasets, with at least a 70% reduction in GFLOPs and a 50% reduction in parameters. In particular, on the Pascal Context PPS dataset, we obtain a 3.4% relative improvement with a ResNet-50 backbone and a 10% improvement after adopting the Swin Transformer. To the best of our knowledge, we are the first to solve the PPS problem with a unified and end-to-end transformer model. Given its effectiveness and conceptual simplicity, we hope our Panoptic-PartFormer can serve as a good baseline and aid future unified research on PPS. Our code and models are available at https://github.com/lxtgh/panoptic-partformer.
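A minimal sketch of the query-feature inner product that produces the final masks, assuming decoupled thing/stuff and part feature maps as described above; shapes and names are illustrative:

```python
import torch

def predict_masks(thing_q, stuff_q, part_q, whole_feat, part_feat):
    """Sketch of query-based mask prediction: each mask is the inner product
    between a query embedding and the feature map of its branch.

    thing_q/stuff_q/part_q: (B, N_*, C) object queries
    whole_feat: (B, C, H, W) thing/stuff feature; part_feat: (B, C, H, W) part feature
    """
    thing_masks = torch.einsum('bnc,bchw->bnhw', thing_q, whole_feat)
    stuff_masks = torch.einsum('bnc,bchw->bnhw', stuff_q, whole_feat)
    part_masks = torch.einsum('bnc,bchw->bnhw', part_q, part_feat)
    # per-pixel mask logits; sigmoid and per-query classification happen elsewhere
    return thing_masks, stuff_masks, part_masks
```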
Human fashion understanding is a crucial computer vision task, as it provides comprehensive information for real-world applications. This work focuses on human fashion segmentation and attribute recognition. In contrast to previous works that separately model each task as a multi-head prediction problem, our insight is to bridge these two tasks with one unified model via Vision Transformer modeling to benefit both tasks. In particular, we introduce object queries for segmentation and attribute queries for attribute prediction. Both queries and their corresponding features can be linked via mask prediction. Then, we adopt a two-stream query learning framework to learn decoupled query representations. We design a novel Multi-Layer Rendering module for the attribute stream to explore more fine-grained features. The decoder design shares the same spirit as DETR. Thus, we name the proposed method FashionFormer. Extensive experiments on three human fashion datasets illustrate the effectiveness of our method. In particular, our method achieves strong results in terms of a joint metric (AP$^{\text{mask}}_{\text{IoU+F}_1}$) for both segmentation and attribute recognition. To the best of our knowledge, ours is the first unified end-to-end vision transformer framework for human fashion analysis. We hope this simple yet effective method can serve as a new flexible baseline for fashion analysis. Code is available at https://github.com/xushilin1/fashionformer.
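One plausible way to link attribute queries to the segmentation stream is mask pooling, sketched below; the module name, fusion design, and attribute count are assumptions rather than the FashionFormer implementation:

```python
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    """Illustrative sketch: attribute queries are linked to segmentation by
    pooling pixel features inside each predicted instance mask and fusing the
    pooled vector with the matching attribute query before multi-label
    attribute classification."""
    def __init__(self, num_attrs, dim=256):
        super().__init__()
        self.fuse = nn.Linear(dim * 2, dim)
        self.cls = nn.Linear(dim, num_attrs)

    def forward(self, attr_queries, mask_logits, pixel_feats):
        # attr_queries: (B, N, C); mask_logits: (B, N, H, W); pixel_feats: (B, C, H, W)
        masks = mask_logits.sigmoid()
        area = masks.sum(dim=(-2, -1)).unsqueeze(-1) + 1e-6              # (B, N, 1)
        pooled = torch.einsum('bnhw,bchw->bnc', masks, pixel_feats) / area
        fused = torch.relu(self.fuse(torch.cat([attr_queries, pooled], dim=-1)))
        return self.cls(fused)          # multi-label attribute logits per instance
```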
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong baseline Deformable DETR by a significant margin (2%-4% mAP) on the ImageNet VID dataset, where TransVOD yields performance comparable to state-of-the-art methods. We then present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into object queries via dynamic convolution, while the latter models the whole video clip as a sequence and outputs detections for all frames at once to speed up inference. We give a detailed analysis of all three models in the experiments. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
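A simplified sketch of temporal query fusion in the spirit of the Temporal Query Encoder, assuming plain self-attention over concatenated per-frame queries; this is an approximation, not the exact TransVOD module:

```python
import torch
import torch.nn as nn

class TemporalQueryEncoder(nn.Module):
    """Sketch of a temporal query fusion step: object queries from reference
    frames and the current frame are concatenated and fused with self-attention,
    and the slots belonging to the current frame are returned as temporally
    enhanced queries for the decoder."""
    def __init__(self, dim=256, num_heads=8, num_layers=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, cur_queries, ref_queries_list):
        # cur_queries: (B, Nq, C); ref_queries_list: list of (B, Nq, C) tensors
        nq = cur_queries.size(1)
        all_q = torch.cat([cur_queries] + list(ref_queries_list), dim=1)
        fused = self.encoder(all_q)
        return fused[:, :nq]            # enhanced queries for the current frame
```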
The recently proposed Depth-aware Video Panoptic Segmentation (DVPS) aims to predict panoptic segmentation results and depth maps in a video, which is a challenging scene understanding problem. In this paper, we present PolyphonicFormer, a vision transformer that unifies all the sub-tasks under the DVPS task. Our method explores the relationship between depth estimation and panoptic segmentation via query-based learning. In particular, we design three different kinds of queries: thing queries, stuff queries, and depth queries. We then propose to learn the correlations among these queries via gated fusion. From the experiments, we demonstrate the benefits of our design from both the depth estimation and panoptic segmentation perspectives. Since each thing query also encodes instance-wise information, it is natural to perform tracking by cropping instance mask features combined with appearance learning. Our method ranked 1st on the ICCV-2021 BMTT Challenge video + depth track. Ablation studies are reported to show how we improve the performance. Code will be available at https://github.com/harboryuan/polyphonicformer.
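A minimal sketch of a gated fusion step between paired segmentation and depth queries; the gating design is an assumption about the mechanism rather than the paper's exact implementation:

```python
import torch
import torch.nn as nn

class GatedQueryFusion(nn.Module):
    """Sketch: a depth query and its paired thing/stuff query exchange
    information through a learned gate, so segmentation cues can modulate the
    depth query (and vice versa if applied symmetrically)."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim * 2, dim), nn.Sigmoid())
        self.proj = nn.Linear(dim, dim)

    def forward(self, seg_query, depth_query):
        # seg_query, depth_query: (B, N, C) paired queries
        g = self.gate(torch.cat([seg_query, depth_query], dim=-1))
        return depth_query + g * self.proj(seg_query)   # gated injection
```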
Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Existing UDA-based semantic segmentation approaches typically reduce the domain shift at the pixel, feature, and output levels. However, almost all of them largely neglect contextual dependency, which is generally shared across different domains, leading to less-desired performance. In this paper, we propose a novel Context-Aware Mixup (CAMix) framework for domain adaptive semantic segmentation, which exploits this important clue of contextual dependency as explicit prior knowledge in a fully end-to-end trainable manner to enhance adaptability to the target domain. Firstly, we present a contextual mask generation strategy that leverages accumulated spatial distributions and prior contextual relationships. The generated contextual mask is critical in this work and guides context-aware domain mixup at three different levels. Furthermore, provided with the contextual knowledge, we introduce a consistency loss to penalize the inconsistency between the mixed student prediction and the mixed teacher prediction, which alleviates negative transfer during adaptation, such as early performance degradation. Extensive experiments and analyses demonstrate the effectiveness of our method against state-of-the-art approaches on widely used UDA benchmarks.
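A rough sketch of a context-guided mixup consistency step under the description above; the context mask is taken as given, and the mixing rule, pixel weighting, and names are assumptions rather than the CAMix formulation:

```python
import torch
import torch.nn.functional as F

def camix_consistency(student, teacher, src_img, tgt_img, src_lbl, ctx_mask, weight):
    """Illustrative sketch: source content is pasted onto the target image
    where the context mask is 1, the teacher's target prediction is mixed with
    the source label in the same way, and a pixel-weighted consistency loss
    supervises the student on the mixed image.

    ctx_mask: (B, 1, H, W) binary mask as float; weight: (B, H, W) pixel weights.
    """
    mixed_img = ctx_mask * src_img + (1 - ctx_mask) * tgt_img
    with torch.no_grad():
        tgt_pseudo = teacher(tgt_img).argmax(dim=1)              # (B, H, W)
    mixed_lbl = torch.where(ctx_mask.squeeze(1).bool(), src_lbl, tgt_pseudo)
    logits = student(mixed_img)
    loss = F.cross_entropy(logits, mixed_lbl, reduction='none')  # (B, H, W)
    return (weight * loss).mean()
```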